30 research outputs found

    ArgMine: Argumentation Mining from Text

    The aim of argumentation mining is the automatic detection and identification of the argumentative structure contained within a piece of natural language text. An argument is an ancient and well-studied rhetorical structure: in general terms, arguments are justifiable positions in which pieces of evidence (premises) are offered in support of a conclusion. The ambiguity of natural language text, different writing styles, implicit context, and the complexity of building argument structures make this a particularly difficult task. By automatically extracting arguments from text, we are able to tell not just what views are being expressed, but also what the reasons are for believing those views. Argumentation mining therefore has the potential to improve research topics such as opinion mining, recommender systems, and multi-agent systems. The full task of argumentation mining can be decomposed into several subtasks; this thesis focuses on the automatic detection and identification of the argumentative components present in the text. This involves detecting the zones of text that contain argumentative content and identifying the fragments of text that form the elementary units of arguments. To detect and identify argumentative components automatically, supervised machine learning algorithms will be used. The target corpus used to train the algorithms consists of news articles written in Portuguese.
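    For readers unfamiliar with the setup the abstract describes, the following is a minimal sketch of supervised argumentative-sentence detection in Python with scikit-learn; the toy sentences, labels, features, and model are illustrative assumptions and do not reproduce the thesis's actual ArgMine pipeline.

        # Minimal sketch of supervised argumentative-component detection.
        # Toy data, features, and model choice are assumptions, not the
        # thesis's actual system.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Hypothetical fragments labelled argumentative (1) or not (0).
        sentences = [
            "O governo deve investir em educação porque reduz a desigualdade.",
            "A reunião decorreu ontem à tarde.",
            "Como os custos aumentaram, a medida deve ser revista.",
            "O evento terá lugar no Porto.",
        ]
        labels = [1, 0, 1, 0]

        model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
        model.fit(sentences, labels)
        print(model.predict(["Esta política justifica-se porque protege os mais vulneráveis."]))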

    A922 Sequential measurement of 1 hour creatinine clearance (1-CRCL) in critically ill patients at risk of acute kidney injury (AKI)

    Meeting abstract.

    NEOTROPICAL XENARTHRANS: a data set of occurrence of xenarthran species in the Neotropics

    Xenarthrans—anteaters, sloths, and armadillos—have essential functions for ecosystem maintenance, such as insect control and nutrient cycling, playing key roles as ecosystem engineers. Because of habitat loss and fragmentation, hunting pressure, and conflicts with domestic dogs, these species have been threatened locally, regionally, or even across their full distribution ranges. The Neotropics harbor 21 species of armadillos, 10 anteaters, and 6 sloths. Our data set includes the families Chlamyphoridae (13), Dasypodidae (7), Myrmecophagidae (3), Bradypodidae (4), and Megalonychidae (2). We have no occurrence data on Dasypus pilosus (Dasypodidae). Regarding Cyclopedidae, until recently, only one species was recognized, but new genetic studies have revealed that the group is represented by seven species. In this data paper, we compiled a total of 42,528 records of 31 species, represented by occurrence and quantitative data, totaling 24,847 unique georeferenced records. The geographic range is from the southern United States, Mexico, and Caribbean countries at the northern portion of the Neotropics, to the austral distribution in Argentina, Paraguay, Chile, and Uruguay. Regarding anteaters, Myrmecophaga tridactyla has the most records (n = 5,941), and Cyclopes sp. have the fewest (n = 240). The armadillo species with the most data is Dasypus novemcinctus (n = 11,588), and the fewest data are recorded for Calyptophractus retusus (n = 33). With regard to sloth species, Bradypus variegatus has the most records (n = 962), and Bradypus pygmaeus has the fewest (n = 12). Our main objective with Neotropical Xenarthrans is to make occurrence and quantitative data available to facilitate more ecological research, particularly if we integrate the xenarthran data with other data sets of Neotropical Series that will become available very soon (i.e., Neotropical Carnivores, Neotropical Invasive Mammals, and Neotropical Hunters and Dogs). Therefore, studies on trophic cascades, hunting pressure, habitat loss, fragmentation effects, species invasion, and climate change effects will be possible with the Neotropical Xenarthrans data set. Please cite this data paper when using its data in publications. We also request that researchers and teachers inform us of how they are using these data
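    For readers who want to explore the records, a minimal usage sketch in Python/pandas follows; the file name and column names (SPECIES, LONGITUDE, LATITUDE) are assumptions made for illustration, so consult the data paper's metadata for the actual schema.

        # Illustrative sketch: count georeferenced occurrence records per species.
        # The CSV name and column names are assumed, not taken from the data paper.
        import pandas as pd

        df = pd.read_csv("neotropical_xenarthrans_occurrences.csv")

        # Keep only records with coordinates, then count per species.
        georeferenced = df.dropna(subset=["LONGITUDE", "LATITUDE"])
        counts = georeferenced.groupby("SPECIES").size().sort_values(ascending=False)
        print(counts.head(10))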

    Characterisation of microbial attack on archaeological bone

    Get PDF
    As part of an EU funded project to investigate the factors influencing bone preservation in the archaeological record, more than 250 bones from 41 archaeological sites in five countries spanning four climatic regions were studied for diagenetic alteration. Sites were selected to cover a range of environmental conditions and archaeological contexts. Microscopic and physical (mercury intrusion porosimetry) analyses of these bones revealed that the majority (68%) had suffered microbial attack. Furthermore, significant differences were found between animal and human bone in both the state of preservation and the type of microbial attack present. These differences in preservation might result from differences in early taphonomy of the bones. © 2003 Elsevier Science Ltd. All rights reserved

    Brazilian Flora 2020: Leveraging the power of a collaborative scientific network

    The shortage of reliable primary taxonomic data limits the description of biological taxa and the understanding of biodiversity patterns and processes, complicating biogeographical, ecological, and evolutionary studies. This deficit creates a significant taxonomic impediment to biodiversity research and conservation planning. The taxonomic impediment and the biodiversity crisis are widely recognized, highlighting the urgent need for reliable taxonomic data. Over the past decade, numerous countries worldwide have devoted considerable effort to Target 1 of the Global Strategy for Plant Conservation (GSPC), which called for the preparation of a working list of all known plant species by 2010 and an online world Flora by 2020. Brazil is a megadiverse country, home to more of the world's known plant species than any other country. Despite that, Flora Brasiliensis, concluded in 1906, was the last comprehensive treatment of the Brazilian flora. The lack of accurate estimates of the number of species of algae, fungi, and plants occurring in Brazil contributes to the prevailing taxonomic impediment and delays progress towards the GSPC targets. Over the past 12 years, a legion of taxonomists motivated to meet Target 1 of the GSPC worked together to gather and integrate knowledge on the algal, plant, and fungal diversity of Brazil. Overall, a team of about 980 taxonomists joined efforts in a highly collaborative project that used cybertaxonomy to prepare an updated Flora of Brazil, showing the power of scientific collaboration to reach ambitious goals. This paper presents an overview of the Brazilian Flora 2020 and provides taxonomic and spatial updates on the algae, fungi, and plants found in one of the world's most biodiverse countries. We further identify collection gaps and summarize future goals that extend beyond 2020. Our results show that Brazil is home to 46,975 native species of algae, fungi, and plants, of which 19,669 are endemic to the country. The data compiled to date suggest that the Atlantic Rainforest might be the most diverse Brazilian domain for all plant groups except gymnosperms, which are most diverse in the Amazon. However, scientific knowledge of Brazilian diversity is still unequally distributed, with the Atlantic Rainforest and the Cerrado being the most intensively sampled and studied biomes in the country. In times of “scientific reductionism”, with botanical and mycological sciences suffering pervasive depreciation in recent decades, the first online Flora of Brazil 2020 significantly enhanced the quality and quantity of taxonomic data available for algae, fungi, and plants from Brazil. This project also made all the information freely available online, providing a firm foundation for future research and for the management, conservation, and sustainable use of the Brazilian funga and flora.

    Reconstruction of interactions in the ProtoDUNE-SP detector with Pandora

    The Pandora Software Development Kit and algorithm libraries provide pattern-recognition logic essential to the reconstruction of particle interactions in liquid argon time projection chamber detectors. Pandora is the primary event reconstruction software used at ProtoDUNE-SP, a prototype for the Deep Underground Neutrino Experiment far detector. ProtoDUNE-SP, located at CERN, is exposed to a charged-particle test beam. This paper gives an overview of the Pandora reconstruction algorithms and how they have been tailored for use at ProtoDUNE-SP. In complex events with numerous cosmic-ray and beam background particles, the simulated reconstruction and identification efficiency for triggered test-beam particles is above 80% for the majority of particle type and beam momentum combinations. Specifically, simulated 1 GeV/c charged pions and protons are correctly reconstructed and identified with efficiencies of 86.1 ± 0.6% and 84.1 ± 0.6%, respectively. The efficiencies measured for test-beam data are shown to be within 5% of those predicted by the simulation.
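    For context on the quoted uncertainties, an efficiency of this kind is a matched-over-total fraction; the sketch below shows that arithmetic with a simple binomial error, using hypothetical counts chosen only to land near the quoted 86.1 ± 0.6% (the paper's own uncertainty treatment may differ).

        # Illustrative sketch: efficiency with a simple binomial uncertainty.
        # Counts and the error model are assumptions, not the paper's method.
        import math

        def efficiency(n_matched: int, n_total: int) -> tuple[float, float]:
            eff = n_matched / n_total
            err = math.sqrt(eff * (1.0 - eff) / n_total)  # binomial approximation
            return eff, err

        eff, err = efficiency(n_matched=2583, n_total=3000)  # hypothetical counts
        print(f"efficiency = {100 * eff:.1f} +/- {100 * err:.1f} %")  # 86.1 +/- 0.6 %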

    DUNE Offline Computing Conceptual Design Report

    This document describes the conceptual design for the Offline Software and Computing for the Deep Underground Neutrino Experiment (DUNE), that is, the offline computing needed to accomplish its physics goals. The goals of the experiment include 1) studying neutrino oscillations using a beam of neutrinos sent from Fermilab in Illinois to the Sanford Underground Research Facility (SURF) in Lead, South Dakota, 2) studying astrophysical neutrino sources and rare processes, and 3) understanding the physics of neutrino interactions in matter. We describe the development of the computing infrastructure needed to achieve the physics goals of the experiment by storing, cataloging, reconstructing, simulating, and analyzing approximately 30 PB of data per year from DUNE and its prototypes, and we concentrate on developing the tools and systems that facilitate the development and deployment of advanced algorithms. Rather than prescribing particular algorithms, our goal is to provide resources that are flexible and accessible enough to support creative software solutions and advanced algorithms as HEP computing evolves. We describe the physics objectives, organization, use cases, and proposed technical solutions.
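    As a back-of-the-envelope illustration of the data volume mentioned above, the sketch below converts roughly 30 PB per year into an average sustained rate; the assumption of a uniform rate over the year is ours, not the report's.

        # Back-of-the-envelope sketch: average rate implied by ~30 PB/year,
        # assuming (our assumption) uniform transfer over the whole year.
        PB = 1e15  # bytes
        SECONDS_PER_YEAR = 365.25 * 24 * 3600

        rate_bytes_per_s = 30 * PB / SECONDS_PER_YEAR
        print(f"{rate_bytes_per_s / 1e9:.2f} GB/s average")       # ~0.95 GB/s
        print(f"{8 * rate_bytes_per_s / 1e9:.1f} Gbit/s average")  # ~7.6 Gbit/s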